Multi-view Graph Convolutional Networks with Differentiable Node Selection
Authors
Abstract
Multi-view data containing complementary and consensus information can facilitate representation learning by exploiting the intact integration of multi-view features. Because most objects in the real world often have underlying connections, organizing multi-view data as heterogeneous graphs is beneficial to extracting latent information among different objects. Due to the powerful capability of gathering information from neighborhood nodes, in this paper we apply Graph Convolutional Network (GCN) to cope with heterogeneous-graph data originating from multi-view data, which is still an under-explored field of GCN. In order to improve the quality of network topology and alleviate the interference of noises yielded during the fusion process, some methods undertake sorting operations before the graph convolution procedure. These GCN-based methods generally sort and select the most confident neighborhood nodes for each vertex, such as picking the top-k nodes according to pre-defined confidence values. Nonetheless, this is problematic due to non-differentiable sorting operators and inflexible graph embedding learning, which may result in blocked gradient computations and undesired performance. To cope with these issues, we propose a joint framework dubbed Multi-view Graph Convolutional Network with Differentiable Node Selection (MGCN-DNS), which is constituted of an adaptive graph fusion layer, a graph learning module and a differentiable node selection schema. MGCN-DNS accepts multi-channel graph-structural inputs and aims to learn a more robust graph representation through a differentiable neural network. The effectiveness of the proposed method is verified by rigorous comparisons with considerable state-of-the-art approaches in terms of multi-view semi-supervised classification tasks, and the experimental results indicate that MGCN-DNS achieves pleasurable performance on several benchmark datasets.
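The obstacle the abstract identifies is that hard top-k neighbor selection is non-differentiable, so gradients cannot flow through the sorting step. Below is a minimal PyTorch sketch of one common workaround, a temperature-controlled softmax relaxation that replaces the binary keep/drop decision with smooth per-neighbor weights. The class name, scoring function, and temperature parameter are illustrative assumptions, not the authors' actual MGCN-DNS schema.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftNodeSelectionGCN(nn.Module):
    """One GCN layer with a soft, differentiable relaxation of top-k
    neighbor selection. Hard sorting blocks gradients; here every edge
    receives a smooth confidence weight, so the layer trains end to end."""

    def __init__(self, in_dim, out_dim, temperature=0.5):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.scorer = nn.Linear(in_dim, 1)  # learned per-node confidence
        self.temperature = temperature      # lower -> closer to hard top-k

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency with
        # self-loops, so every row has at least one nonzero entry.
        scores = self.scorer(x).squeeze(-1)                        # (N,)
        logits = scores.unsqueeze(0).expand_as(adj) / self.temperature
        logits = logits.masked_fill(adj == 0, float("-inf"))
        soft_select = F.softmax(logits, dim=-1)  # row-wise neighbor weights
        return F.relu(soft_select @ self.lin(x))

# Toy usage: 5 nodes, 8 input features, random graph with self-loops.
x = torch.randn(5, 8)
adj = ((torch.rand(5, 5) > 0.6).float() + torch.eye(5)).clamp(max=1)
layer = SoftNodeSelectionGCN(in_dim=8, out_dim=16)
print(layer(x, adj).shape)  # torch.Size([5, 16])
```

Because every neighbor gets a smooth weight rather than a binary decision, the selection approaches hard top-k behavior as the temperature shrinks while remaining trainable end to end.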
Similar resources
Convolutional Neural Networks Via Node-Varying Graph Filters
Convolutional neural networks (CNNs) are being applied to an increasing number of problems and fields due to their superior performance in classification and regression tasks. Since two of the key operations that CNNs implement are convolution and pooling, this type of network is implicitly designed to act on data described by regular structures such as images. Motivated by the recent interest...
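The "node-varying" idea can be made concrete: a classical FIR graph filter applies the same tap coefficients to every node in y = Σ_k h_k S^k x, whereas a node-varying filter gives each node its own taps, y = Σ_k diag(h_k) S^k x. The sketch below is a hypothetical PyTorch rendering of that idea; the class name and parameter shapes are my assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class NodeVaryingGraphFilter(nn.Module):
    """Node-varying FIR graph filter: y = sum_k diag(h_k) S^k x,
    where each node n owns its own tap coefficients h_k[n]."""

    def __init__(self, num_nodes, num_taps):
        super().__init__()
        # One coefficient per (tap k, node n): shape (K, N).
        self.taps = nn.Parameter(0.1 * torch.randn(num_taps, num_nodes))

    def forward(self, x, shift):
        # x: (N, F) graph signal; shift: (N, N) graph shift operator
        # (e.g. adjacency or Laplacian). S^k x is built iteratively.
        out = torch.zeros_like(x)
        shifted = x
        for k in range(self.taps.shape[0]):
            out = out + self.taps[k].unsqueeze(-1) * shifted  # diag(h_k) S^k x
            shifted = shift @ shifted
        return out
```

Setting all rows of `taps` equal across nodes recovers the ordinary shift-invariant graph filter.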
Category-Specific Salient View Selection via Deep Convolutional Neural Networks
In this paper, we present a new framework to determine upright orientations and detect salient views of 3D models. The salient viewpoint, with respect to human preferences, is the most informative projection with the correct upright orientation. Our method utilizes two Convolutional Neural Network (CNN) architectures to encode category-specific information learnt from a large number of 3D shapes and 2D images on...
Dynamic Graph Convolutional Networks
Many different classification tasks need to manage structured data, which are usually modeled as graphs. Moreover, these graphs can be dynamic, meaning that the vertices/edges of each graph may change over time. Our goal is to jointly exploit structured data and temporal information through the use of a neural network model. To the best of our knowledge, this task has not been addressed using...
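One straightforward way to realize this combination is to run a shared GCN layer on each snapshot of the dynamic graph and let a recurrent network aggregate every node's embedding sequence. The sketch below is a generic illustration under that assumption, not necessarily the cited paper's architecture.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    # Plain dense GCN layer: H' = ReLU(A_hat @ H @ W).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        return torch.relu(a_hat @ self.lin(x))

class TemporalGCN(nn.Module):
    """Apply a shared GCN at every time step, then let an LSTM
    aggregate each node's embedding sequence over time."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gcn = GCNLayer(in_dim, hid_dim)
        self.lstm = nn.LSTM(hid_dim, hid_dim, batch_first=True)

    def forward(self, xs, a_hats):
        # xs: list of (N, in_dim) feature matrices, one per time step;
        # a_hats: list of (N, N) normalized adjacencies, one per step.
        seq = torch.stack([self.gcn(x, a) for x, a in zip(xs, a_hats)], dim=1)
        out, _ = self.lstm(seq)   # (N, T, hid_dim): nodes act as the batch
        return out[:, -1]         # final-step embedding for each node
```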
Graph Convolutional Networks
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden lay...
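Concretely, the localized first-order approximation mentioned here yields the propagation rule H^{(l+1)} = σ(D̃^{-1/2} Ã D̃^{-1/2} H^{(l)} W^{(l)}) with Ã = A + I. A small helper for the renormalized adjacency is sketched below; the function and variable names are mine, not from the paper's release.

```python
import torch

def normalized_adjacency(adj):
    """Renormalization trick: A_hat = D_tilde^{-1/2} (A + I) D_tilde^{-1/2}.
    One GCN layer is then H_next = relu(A_hat @ H @ W)."""
    a_tilde = adj + torch.eye(adj.shape[0])     # add self-loops
    d_inv_sqrt = a_tilde.sum(dim=1).pow(-0.5)   # diagonal of D_tilde^{-1/2}
    # Scale rows, then columns, by D_tilde^{-1/2}.
    return d_inv_sqrt.unsqueeze(1) * a_tilde * d_inv_sqrt.unsqueeze(0)
```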
3D multi-view convolutional neural networks for lung nodule classification
The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architectu...
Journal
Journal title: ACM Transactions on Knowledge Discovery from Data
Year: 2023
ISSN: 1556-472X, 1556-4681
DOI: https://doi.org/10.1145/3608954